-
Scientists design models to understand phenomena, make predictions, and inform decision-making. This study targets models that encapsulate spatially evolving phenomena. Given a model, our objective is to assess its accuracy across all geospatial extents. A scientist may expect these validations to occur at varying spatial resolutions (e.g., states, counties, towns, and census tracts). Assessing a model with all available ground-truth data is infeasible due to the data volumes involved. We propose a framework to assess the performance of models at scale over diverse spatial data collections. Our methodology orchestrates validation workloads while reducing memory strain, alleviating contention, enabling concurrency, and ensuring high throughput. We introduce the notion of a validation budget, an upper bound on the total number of observations used to assess the performance of models across spatial extents. The validation budget attempts to capture the distribution characteristics of observations and is informed by multiple sampling strategies. Our design decouples validation from the underlying model-fitting libraries so it interoperates with models constructed using different libraries and analytical engines; our research prototype currently supports Scikit-learn, PyTorch, and TensorFlow.
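A minimal sketch of the two ideas the abstract describes, under stated assumptions: a validation budget split across spatial extents in proportion to how many ground-truth observations each extent holds, and a library-agnostic `predict_fn` callable so the validator never imports the model-fitting library directly. The function names (`allocate_budget`, `validate_extent`) and the proportional-allocation strategy are illustrative, not taken from the paper.

```python
# Hedged sketch: proportional allocation of a validation budget across spatial
# extents, plus evaluation through a generic predict() callable so the validator
# does not depend on Scikit-learn, PyTorch, or TensorFlow specifics.
import numpy as np

def allocate_budget(counts_per_extent, total_budget):
    """Split a global validation budget across extents in proportion to how
    many ground-truth observations each extent holds (illustrative strategy)."""
    counts = np.asarray(list(counts_per_extent.values()), dtype=float)
    shares = np.floor(total_budget * counts / counts.sum()).astype(int)
    return dict(zip(counts_per_extent.keys(), shares))

def validate_extent(predict_fn, features, targets, sample_size, rng):
    """Score one spatial extent on a random sample no larger than its budget share."""
    idx = rng.choice(len(targets), size=min(sample_size, len(targets)), replace=False)
    preds = np.asarray(predict_fn(features[idx]))
    return float(np.sqrt(np.mean((preds - targets[idx]) ** 2)))  # RMSE

# Example usage with a trivial stand-in model:
rng = np.random.default_rng(42)
extents = {"county_A": 5000, "county_B": 1500}
budget = allocate_budget(extents, total_budget=1000)
X, y = rng.normal(size=(5000, 3)), rng.normal(size=5000)
rmse = validate_extent(lambda x: x.mean(axis=1), X, y, budget["county_A"], rng)
```

Because `predict_fn` is just a callable, the same validator could wrap a Scikit-learn estimator's `predict`, a PyTorch module's forward pass, or a TensorFlow model, which is the decoupling the abstract refers to.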
-
Spatial data volumes have increased exponentially over the past couple of decades. This growth has been fueled by networked observational devices, remote sensing sources such as satellites, and simulations that characterize the spatiotemporal dynamics of phenomena (e.g., climate). Manual inspection of these data becomes infeasible at such scales. Fitting models to the data offers an avenue to extract patterns, make predictions, and inform both understanding of phenomena and decision-making. Innovations in deep learning and its ability to capture non-linear interactions between features make it particularly relevant for spatial datasets. However, deep learning workloads tend to be resource-intensive. In this study, we design and contrast transfer learning schemes to substantively alleviate resource requirements for training deep learning models over spatial data at scale. We profile the suitability of our methodology using deep networks built over satellite datasets and gridded data. Empirical benchmarks demonstrate that our spatiotemporally aligned transfer learning scheme ensures a ~2.87-5.3-fold reduction in completion times for each model without sacrificing model accuracy.
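To make the transfer-learning idea concrete, here is a minimal PyTorch sketch of the general pattern: initialize the model for a target region from the weights of a model already trained on a spatiotemporally aligned source region, then fine-tune briefly. The tiny network, the learning rate, and the placeholder tensors are assumptions for illustration; they do not reproduce the paper's architectures or datasets.

```python
# Hedged sketch of warm-starting a target-region model from a source-region model.
import torch
import torch.nn as nn

def make_model(n_features: int) -> nn.Module:
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

source_model = make_model(n_features=8)
# ... assume source_model has already been fully trained on the source region ...

target_model = make_model(n_features=8)
target_model.load_state_dict(source_model.state_dict())  # warm start from source weights

# Fine-tune on the (smaller) target-region dataset at a reduced learning rate.
optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
X = torch.randn(256, 8)   # placeholder target-region features
y = torch.randn(256, 1)   # placeholder target-region labels
for _ in range(10):       # short fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(target_model(X), y)
    loss.backward()
    optimizer.step()
```

The reduction in completion time comes from the shorter fine-tuning phase relative to training each regional model from scratch.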
-
Spatial data volumes have grown exponentially over the past several years. Domains in which spatial data are extensively leveraged include atmospheric sciences, environmental monitoring, ecological modeling, epidemiology, sociology, commerce, and social media, among others. These data are often used to understand phenomena and inform decision-making by fitting models to them. In this study, we present our methodology to fit models at scale over spatial data. Our methodology encompasses segmentation, spatial similarity based on the dataset(s) under consideration, and transfer learning schemes that are informed by the spatial similarity to train models faster while utilizing fewer resources. We consider several model-fitting algorithms and execution within containerized environments as we profile the suitability of our methodology. Our benchmarks validate the suitability of our methodology to facilitate faster, resource-efficient training of models over spatial data.
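A small sketch of how spatial similarity might steer transfer learning, under stated assumptions: summarize each spatial segment by simple feature statistics and pick the most similar already-trained segment as the transfer source for a new one. The similarity measure here (Euclidean distance between per-segment feature means and standard deviations) is purely illustrative; the paper's measure may differ.

```python
# Hedged sketch: choose a transfer-learning parent for a new spatial segment
# based on similarity between per-segment feature summaries.
import numpy as np

def segment_signature(features: np.ndarray) -> np.ndarray:
    """Summarize a segment by its per-feature means and standard deviations."""
    return np.concatenate([features.mean(axis=0), features.std(axis=0)])

def most_similar_segment(new_features, trained_segments):
    """Return the id of the trained segment whose signature is closest."""
    target = segment_signature(new_features)
    return min(
        trained_segments,
        key=lambda sid: np.linalg.norm(segment_signature(trained_segments[sid]) - target),
    )

rng = np.random.default_rng(0)
trained = {"seg_1": rng.normal(0.0, 1.0, (500, 4)), "seg_2": rng.normal(3.0, 1.0, (500, 4))}
new_seg = rng.normal(0.2, 1.0, (200, 4))
parent = most_similar_segment(new_seg, trained)  # expected: "seg_1"
```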
-
Geospatial data collections are now available in a multiplicity of domains. The accompanying data volumes, variety, and diversity of encoding formats within these collections have all continued to grow. These data offer opportunities to extract patterns, understand phenomena, and inform decision-making by fitting models to the data. To ensure accuracy and effectiveness, these models need to be constructed at geospatial extents/scopes that are aligned with the nature of decision-making, such as administrative boundaries (census tracts, towns, counties, and states). This entails constructing a large number of models and orchestrating their accompanying resource requirements (CPU, RAM, and I/O) within shared computing clusters. In this study, we describe our methodology to facilitate model construction at scale by substantively alleviating resource requirements while preserving accuracy. Our benchmarks demonstrate the suitability of our methodology.
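As a rough illustration of fitting one model per administrative extent while bounding resource pressure on a shared cluster node, the sketch below uses a capped worker pool. The extent names, the choice of a Scikit-learn `Ridge` regressor, and the worker count are assumptions for the example and are not taken from the paper.

```python
# Hedged sketch: fit one model per administrative extent with bounded concurrency.
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from sklearn.linear_model import Ridge

def fit_extent(args):
    extent_id, X, y = args
    model = Ridge().fit(X, y)
    return extent_id, model

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    jobs = [(f"county_{i}", rng.normal(size=(1000, 5)), rng.normal(size=1000))
            for i in range(8)]
    # max_workers caps concurrent CPU/RAM use on a shared node.
    with ProcessPoolExecutor(max_workers=4) as pool:
        fitted = dict(pool.map(fit_extent, jobs))
```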
-
We describe our methodology to support time-series forecasts over spatial datasets using the Prophet library. Our approach, underpinned by our transfer learning scheme, ensures that model instances capture subtle regional variations and converge faster while using fewer resources. Our benchmarks demonstrate the suitability of our methodology.
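For context, a minimal per-region Prophet forecast looks like the sketch below. Prophet expects a dataframe with columns `ds` (timestamps) and `y` (values); the `forecast_region` helper, the horizon, and the synthetic data are assumptions for illustration, and the paper's transfer learning scheme is not reproduced here.

```python
# Hedged sketch: fit Prophet to one region's history and forecast ahead.
import pandas as pd
from prophet import Prophet

def forecast_region(region_df: pd.DataFrame, horizon_days: int = 30) -> pd.DataFrame:
    """Fit Prophet to a single region and forecast `horizon_days` into the future."""
    model = Prophet()
    model.fit(region_df[["ds", "y"]])
    future = model.make_future_dataframe(periods=horizon_days)
    return model.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]]

# Example with synthetic daily data for a single region:
history = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=365, freq="D"),
    "y": range(365),
})
forecast = forecast_region(history)
```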
